
    Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    Background: Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods: Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results: Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions: Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science.
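The additive scoring step the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the causes, symptoms, and tariff values below are hypothetical, and the real method learns tariffs from validated training data.

```python
# Illustrative sketch of Tariff's scoring step. Precomputed (hypothetical)
# tariffs: for each cause, a score per sign/symptom.
tariffs = {
    "cause_A": {"fever": 3.0, "cough": -1.0, "rash": 0.5},
    "cause_B": {"fever": -0.5, "cough": 4.0, "rash": 0.0},
}

def predict_cause(responses):
    """Sum the tariffs of the symptoms endorsed in one verbal autopsy
    and return the cause with the highest total score."""
    scores = {
        cause: sum(t for symptom, t in symptom_tariffs.items()
                   if responses.get(symptom))
        for cause, symptom_tariffs in tariffs.items()
    }
    return max(scores, key=scores.get)

# One interview: symptoms reported present/absent.
va = {"fever": True, "cough": True, "rash": False}
print(predict_cause(va))  # cause_B: its cough tariff dominates
```

At the population level, CSMFs then follow by tallying the predicted causes over all interviews, which is what makes the method usable without statistical training.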

    Probabilistic Analysis of Facility Location on Random Shortest Path Metrics

    The facility location problem is an NP-hard optimization problem, so approximation algorithms are often used to solve large instances. Such algorithms often perform much better than worst-case analysis suggests, which makes probabilistic analysis a widely used tool for studying them. Most research on probabilistic analysis of NP-hard optimization problems involving metric spaces, such as the facility location problem, has focused on Euclidean instances; instances with independent (random) edge lengths, which are non-metric, have also been studied. We would like to extend this knowledge to other, more general metrics. We investigate the facility location problem using random shortest path metrics. We analyze some probabilistic properties of a simple greedy heuristic which gives a solution to the facility location problem: opening the κ cheapest facilities (with κ depending only on the facility opening costs). If the facility opening costs are such that κ is not too large, then we show that this heuristic is asymptotically optimal. On the other hand, for large values of κ, the analysis becomes more difficult, and we provide a closed-form expression as an upper bound for the expected approximation ratio. In the special case where all facility opening costs are equal, this closed-form expression reduces to O(\sqrt[4]{\ln(n)}), or O(1), or even 1 + o(1) if the opening costs are sufficiently small. Comment: A preliminary version accepted to CiE 201
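The greedy heuristic in question is simple enough to state in code. The sketch below assumes toy opening costs and a small explicit distance matrix rather than the random shortest path metrics the paper analyzes; it only illustrates the "open the κ cheapest facilities, connect each client to the nearest open one" rule.

```python
def greedy_facility_location(opening_costs, dist, kappa):
    """opening_costs: list of facility opening costs.
    dist[i][j]: metric distance from client i to facility j.
    Returns (total cost, indices of the opened facilities)."""
    # Open the kappa facilities with the smallest opening cost.
    opened = sorted(range(len(opening_costs)),
                    key=lambda j: opening_costs[j])[:kappa]
    open_cost = sum(opening_costs[j] for j in opened)
    # Connect every client to its nearest opened facility.
    connect_cost = sum(min(row[j] for j in opened) for row in dist)
    return open_cost + connect_cost, opened

# Hypothetical instance: 3 facilities, 2 clients.
costs = [5.0, 1.0, 3.0]
dist = [[2.0, 4.0, 1.0],   # client 0
        [3.0, 1.0, 2.0]]   # client 1
print(greedy_facility_location(costs, dist, kappa=2))  # opens facilities 1 and 2
```

Note that the heuristic never looks at the distances when choosing which facilities to open; the paper's contribution is showing when this cost-only rule is nevertheless close to optimal.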

    Ternary Syndrome Decoding with Large Weight

    The Syndrome Decoding problem is at the core of many code-based cryptosystems. In this paper, we study ternary Syndrome Decoding with large weight. This problem was introduced in the Wave signature scheme but has never been thoroughly studied. We perform an algorithmic study of this problem which results in an update of the Wave parameters. On a more fundamental level, we show that ternary Syndrome Decoding with large weight is a significantly harder problem than binary Syndrome Decoding, which could have several applications for the design of code-based cryptosystems.

    Direct estimation of cause-specific mortality fractions from verbal autopsies: multisite validation study using clinical diagnostic gold standards

    Background: Verbal autopsy (VA) is used to estimate the causes of death in areas with incomplete vital registration systems. The King and Lu method (KL) for direct estimation of cause-specific mortality fractions (CSMFs) from VA studies is an analysis technique that estimates CSMFs in a population without predicting individual-level cause of death as an intermediate step. In previous studies, KL has shown promise as an alternative to physician-certified verbal autopsy (PCVA). However, it has previously been impossible to validate KL with a large dataset of VAs for which the underlying cause of death is known to meet rigorous clinical diagnostic criteria. Methods: We applied the KL method to adult, child, and neonatal VA datasets from the Population Health Metrics Research Consortium gold standard verbal autopsy validation study, a multisite sample of 12,542 VAs where gold standard cause of death was established using strict clinical diagnostic criteria. To emulate real-world populations with varying CSMFs, we evaluated the KL estimations for 500 different test datasets of varying cause distribution. We assessed the quality of these estimates in terms of CSMF accuracy as well as linear regression and compared this with the results of PCVA. Results: KL performance is similar to PCVA in terms of CSMF accuracy, attaining values of 0.669, 0.698, and 0.795 for adult, child, and neonatal age groups, respectively, when health care experience (HCE) items were included. We found that the length of the cause list has a dramatic effect on KL estimation quality, with CSMF accuracy decreasing substantially as the length of the cause list increases. We found that KL is not reliant on HCE the way PCVA is, and without HCE, KL outperforms PCVA for all age groups. Conclusions: Like all computer methods for VA analysis, KL is faster and cheaper than PCVA. Since it is a direct estimation technique, though, it does not produce individual-level predictions. KL estimates are of similar quality to PCVA and slightly better in most cases. Compared to other recently developed methods, however, KL would only be the preferred technique when the cause list is short and individual-level predictions are not needed.

    Chosen-ciphertext security from subset sum

    We construct a public-key encryption (PKE) scheme whose security is polynomial-time equivalent to the hardness of the Subset Sum problem. Our scheme achieves the standard notion of indistinguishability against chosen-ciphertext attacks (IND-CCA) and can be used to encrypt messages of arbitrary polynomial length, improving upon a previous construction by Lyubashevsky, Palacio, and Segev (TCC 2010) which achieved only the weaker notion of semantic security (IND-CPA) and whose concrete security decreases with the length of the message being encrypted. At the core of our construction is a trapdoor technique which originates in the work of Micciancio and Peikert (Eurocrypt 2012).

    Robust metrics for assessing the performance of different verbal autopsy cause assignment methods in validation studies

    Background: Verbal autopsy (VA) is an important method for obtaining cause of death information in settings without vital registration and medical certification of causes of death. An array of methods, including physician review and computer-automated methods, have been proposed and used. Choosing the best method for VA requires appropriate metrics for assessing performance. Currently used metrics such as sensitivity, specificity, and cause-specific mortality fraction (CSMF) errors do not provide a robust basis for comparison. Methods: We use simple simulations of populations with three causes of death to demonstrate that most metrics used in VA validation studies are extremely sensitive to the CSMF composition of the test dataset. Simulations also demonstrate that an inferior method can appear to have better performance than an alternative due strictly to the CSMF composition of the test set. Results: VA methods need to be evaluated across a set of test datasets with widely varying CSMF compositions. We propose two metrics for assessing the performance of a proposed VA method. For assessing how well a method does at individual cause of death assignment, we recommend the average chance-corrected concordance across causes. This metric is insensitive to the CSMF composition of the test sets and corrects for the degree to which a method will get the cause correct due strictly to chance. For the evaluation of CSMF estimation, we propose CSMF accuracy. CSMF accuracy is defined as one minus the sum of all absolute CSMF errors across causes divided by the maximum total error. It is scaled from zero to one and can generalize a method's CSMF estimation capability regardless of the number of causes. Performance of a VA method for CSMF estimation by cause can be assessed by examining the relationship across test datasets between the estimated CSMF and the true CSMF. Conclusions: With an increasing range of VA methods available, it will be critical to objectively assess their performance in assigning cause of death. Chance-corrected concordance and CSMF accuracy assessed across a large number of test datasets with widely varying CSMF composition provide a robust strategy for this assessment.
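The CSMF accuracy metric described above is straightforward to compute. The sketch below follows the definition in the abstract, using the fact that for fractions summing to one the maximum possible total absolute error is 2(1 − the smallest true CSMF); the cause labels and fractions are hypothetical.

```python
def csmf_accuracy(true_csmf, est_csmf):
    """CSMF accuracy: 1 minus the total absolute CSMF error, normalized
    by the maximum possible total error, 2 * (1 - min true CSMF).
    Both arguments map cause -> fraction, each summing to 1."""
    total_error = sum(abs(true_csmf[c] - est_csmf[c]) for c in true_csmf)
    max_error = 2.0 * (1.0 - min(true_csmf.values()))
    return 1.0 - total_error / max_error

true = {"A": 0.5, "B": 0.3, "C": 0.2}
est  = {"A": 0.4, "B": 0.4, "C": 0.2}
print(csmf_accuracy(true, est))  # 0.875
```

A perfect estimate scores 1.0 and the worst possible estimate scores 0.0, which is what lets the metric compare methods across cause lists of different lengths.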

    Revising the WHO verbal autopsy instrument to facilitate routine cause-of-death monitoring.

    OBJECTIVE: Verbal autopsy (VA) is a systematic approach for determining causes of death (CoD) in populations without routine medical certification. It has mainly been used in research contexts and involved relatively lengthy interviews. Our objective here is to describe the process used to shorten, simplify, and standardise the VA process to make it feasible for application on a larger scale such as in routine civil registration and vital statistics (CRVS) systems. METHODS: A literature review of existing VA instruments was undertaken. The World Health Organization (WHO) then facilitated an international consultation process to review experiences with existing VA instruments, including those from WHO, the Demographic Evaluation of Populations and their Health in Developing Countries (INDEPTH) Network, InterVA, and the Population Health Metrics Research Consortium (PHMRC). In an expert meeting, consideration was given to formulating a workable VA CoD list [with mapping to the International Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) CoD] and to the viability and utility of existing VA interview questions, with a view to undertaking systematic simplification. FINDINGS: A revised VA CoD list was compiled enabling mapping of all ICD-10 CoD onto 62 VA cause categories, chosen on the grounds of public health significance as well as potential for ascertainment from VA. A set of 221 indicators for inclusion in the revised VA instrument was developed on the basis of accumulated experience, with appropriate skip patterns for various population sub-groups. The duration of a VA interview was reduced by about 40% with this new approach. CONCLUSIONS: The revised VA instrument resulting from this consultation process is presented here as a means of making it available for widespread use and evaluation. It is envisaged that this will be used in conjunction with automated models for assigning CoD from VA data, rather than involving physicians.

    Simplified Symptom Pattern Method for verbal autopsy analysis: multisite validation study using clinical diagnostic gold standards

    Background: Verbal autopsy can be a useful tool for generating cause of death data in data-sparse regions around the world. The Symptom Pattern (SP) Method is one promising approach to analyzing verbal autopsy data, but it has not been tested rigorously with gold standard diagnostic criteria. We propose a simplified version of SP and evaluate its performance using verbal autopsy data with accompanying true cause of death. Methods: We investigated specific parameters in SP's Bayesian framework that allow for its optimal performance in both assigning individual cause of death and in determining cause-specific mortality fractions. We evaluated these outcomes of the method separately for adult, child, and neonatal verbal autopsies in 500 different population constructs of verbal autopsy data to analyze its ability in various settings. Results: We determined that a modified, simpler version of Symptom Pattern (termed Simplified Symptom Pattern, or SSP) performs better than the previously developed approach. Across 500 samples of verbal autopsy testing data, SSP achieves a median cause-specific mortality fraction accuracy of 0.710 for adults, 0.739 for children, and 0.751 for neonates. In individual cause of death assignment in the same testing environment, SSP achieves 45.8% chance-corrected concordance for adults, 51.5% for children, and 32.5% for neonates. Conclusions: The Simplified Symptom Pattern Method for verbal autopsy can yield reliable and reasonably accurate results for both individual cause of death assignment and for determining cause-specific mortality fractions. The method demonstrates that verbal autopsies coupled with SSP can be a useful tool for analyzing mortality patterns and determining individual cause of death from verbal autopsy data.
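The general Bayesian framework behind symptom-pattern methods can be illustrated with a naive-Bayes-style posterior: score each cause by the log prior plus the log likelihood of the observed symptom responses, assuming symptoms are independent given the cause. This is only an illustrative sketch under that independence assumption, with hypothetical probabilities; the published SSP differs in its details.

```python
import math

# Hypothetical P(symptom present | cause) and cause priors.
p_symptom_given_cause = {
    "cause_A": {"fever": 0.8, "cough": 0.2},
    "cause_B": {"fever": 0.3, "cough": 0.9},
}
p_cause = {"cause_A": 0.5, "cause_B": 0.5}

def assign_cause(responses):
    """Return the cause with the highest posterior log score for one
    interview, treating symptoms as conditionally independent."""
    scores = {}
    for cause, probs in p_symptom_given_cause.items():
        s = math.log(p_cause[cause])
        for symptom, present in responses.items():
            p = probs[symptom]
            s += math.log(p if present else 1.0 - p)
        scores[cause] = s
    return max(scores, key=scores.get)

print(assign_cause({"fever": True, "cough": False}))  # cause_A
```

The simplification studied in the paper concerns which parameters of this kind of Bayesian model are actually needed for good individual-level and population-level performance.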
